    Detecting COVID-19 in chest X-ray images

    One reliable way of detecting coronavirus disease 2019 (COVID-19) is through chest x-ray images, because the disease produces characteristic complications in the lung parenchyma. This paper proposes a convolutional neural network (CNN) solution for COVID-19 detection in chest x-ray images, built on a modified InceptionV3 backbone. Self-attention layers are inserted into the backbone so that the number of trainable parameters is reduced and training focuses on the image regions most indicative of COVID-19. The resulting model classifies COVID-19 cases against non-COVID-19 cases, achieving sensitivity, specificity, and accuracy of 93%, 96%, and 96%, respectively. The model is further validated on additional non-COVID-19 cases, termed "other normal" and "other abnormal". The other normal cases contain chest x-rays of elderly patients with minimal fibrosis and spondylosis of the spine, whereas the other abnormal cases contain chest x-rays of tuberculosis, pneumonia, and pulmonary edema. The proposed solution correctly classifies these as non-COVID-19 with 92% accuracy, a practical scenario in which non-COVID-19 cases cover more than just the normal condition.
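    The paper itself does not provide code. As a rough illustration only, the PyTorch sketch below shows one common form of 2-D self-attention block of the kind that could be inserted into an InceptionV3 backbone; the layer layout, reduction factor, and dummy feature-map size are assumptions, not the authors' implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SelfAttention2d(nn.Module):
            """SAGAN-style self-attention over a 2-D feature map (assumed form)."""
            def __init__(self, channels, reduction=8):
                super().__init__()
                self.query = nn.Conv2d(channels, channels // reduction, 1)
                self.key = nn.Conv2d(channels, channels // reduction, 1)
                self.value = nn.Conv2d(channels, channels, 1)
                self.gamma = nn.Parameter(torch.zeros(1))       # learned mixing weight

            def forward(self, x):
                b, c, h, w = x.shape
                q = self.query(x).flatten(2).transpose(1, 2)    # (b, hw, c')
                k = self.key(x).flatten(2)                      # (b, c', hw)
                attn = F.softmax(q @ k, dim=-1)                 # (b, hw, hw)
                v = self.value(x).flatten(2)                    # (b, c, hw)
                out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
                return self.gamma * out + x                     # residual connection

        # Dummy tensor standing in for an intermediate InceptionV3 activation.
        feat = torch.randn(2, 768, 17, 17)
        print(SelfAttention2d(768)(feat).shape)   # torch.Size([2, 768, 17, 17])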

    Automatic segmentation of kidney and liver tumors in CT images

    Automatic segmentation of hepatic lesions in computed tomography (CT) images is a challenging task because of the heterogeneous, diffuse shape of tumors and the complex background. To address the problem, researchers increasingly rely on deep convolutional neural networks (CNNs) with 2D or 3D architectures, which have proven effective in a wide range of computer vision tasks, including medical image processing. In this technical report, we focus on a more careful approach to the learning process rather than on a complex CNN architecture. We use the MICCAI 2017 LiTS dataset for training and the public 3DIRCADb dataset for validation. The proposed algorithm reaches a Dice score of 78.8% on the 3DIRCADb dataset. The method was then applied to the 2019 Kidney Tumor Segmentation (KiTS-2019) challenge, where our single submission achieved Dice scores of 96.38% for kidney and 67.38% for tumor.
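    For reference, the Dice score reported above is the standard overlap measure between a predicted mask and a ground-truth mask. A minimal NumPy sketch (not the authors' evaluation code; the smoothing term is an assumption) is:

        import numpy as np

        def dice_score(pred, target, eps=1e-7):
            """Dice coefficient between two binary masks: 2|P & G| / (|P| + |G|)."""
            pred, target = pred.astype(bool), target.astype(bool)
            intersection = np.logical_and(pred, target).sum()
            return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

        # Toy example with two overlapping 2-D masks.
        pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1:3] = 1
        gt = np.zeros((4, 4), dtype=np.uint8);   gt[1:3, 1:4] = 1
        print(round(dice_score(pred, gt), 3))    # 0.8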

    Recognizing Gaits on Spatio-Temporal Feature Domain


    A direct method to self-calibrate a surveillance camera by observing a walking pedestrian

    Recent efforts show that it is possible to calibrate a surveillance camera simply by observing a walking human, a procedure that can be seen as a special application of camera self-calibration. Several methods have been proposed along this line, but most impose restrictions, such as requiring the human to walk at a constant speed or requiring two orthogonal lines marked on the ground, which hinder their applicability. In this paper we propose a new method that removes most of these restrictions. By exploiting the cross-ratio relationship in projective geometry, our method directly estimates the full 3 x 4 camera projection matrix without first decomposing it into physical parameters such as focal length and optical center. Extensive experiments on real data show that the algorithm performs well in practical situations.
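    The abstract does not spell out how the projection matrix is recovered from the cross-ratio constraints. As general background only, the NumPy sketch below estimates a 3 x 4 projection matrix from hypothetical 3-D/2-D point correspondences with the standard Direct Linear Transform (DLT); it is not the authors' cross-ratio-based algorithm, and the synthetic camera and points are made up for illustration.

        import numpy as np

        def dlt_projection_matrix(X, x):
            """Estimate a 3x4 projection matrix P (up to scale) from n >= 6
            3-D points X (n, 3) and their 2-D projections x (n, 2) via DLT."""
            A = []
            for (Xw, Yw, Zw), (u, v) in zip(X, x):
                Pt = np.array([Xw, Yw, Zw, 1.0])
                A.append(np.concatenate([Pt, np.zeros(4), -u * Pt]))
                A.append(np.concatenate([np.zeros(4), Pt, -v * Pt]))
            _, _, Vt = np.linalg.svd(np.asarray(A))
            return Vt[-1].reshape(3, 4)          # null-space vector = flattened P

        # Synthetic camera and points (e.g. head/foot positions of a pedestrian).
        P_true = np.array([[800., 0., 320., 10.],
                           [0., 800., 240., 20.],
                           [0., 0., 1., 5.]])
        X = np.random.rand(8, 3) * 10
        xh = (P_true @ np.c_[X, np.ones(8)].T).T
        x = xh[:, :2] / xh[:, 2:]
        P_est = dlt_projection_matrix(X, x)
        print(np.allclose(P_est / P_est[-1, -1], P_true / P_true[-1, -1]))  # True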

    Gait recognition across various walking speeds using higher order shape configuration based on a differential composition model

    Gait is known to be an effective biometric feature for identifying a person at a distance. However, variation in walking speed can significantly change human walking patterns, which creates difficulties for gait recognition. This paper carries out a comprehensive analysis to identify such effects. Based on the analysis, Procrustes shape analysis is adopted for gait signature description and the corresponding similarity measurement. To tackle the challenges raised by speed change, the paper proposes a higher-order shape configuration for gait shape description, which preserves discriminative information in the gait signatures while tolerating varying walking speed. Instead of measuring the similarity between two gaits by treating each as a single unified object, a differential composition model (DCM) is constructed. The DCM separates the different effects that walking speed changes have on the various human body parts, and at the same time balances the different discriminative power each body part contributes to the overall gait similarity measurement. In this model, the Fisher discriminant ratio is used to compute a weight for each body part. Comprehensive experiments on widely adopted gait databases demonstrate that the proposed method is effective for cross-speed gait recognition and outperforms other state-of-the-art methods.
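    The abstract leaves the Fisher-ratio weighting implicit. The NumPy sketch below shows one common form of it (squared between-class separation over within-class scatter, computed per body part on hypothetical genuine/impostor distance scores); the part names, score distributions, and normalization are assumptions, not the paper's exact formulation.

        import numpy as np

        def fisher_ratio(genuine, impostor):
            """Fisher discriminant ratio for one body part:
            squared mean separation over summed within-class variances."""
            return (genuine.mean() - impostor.mean()) ** 2 / (genuine.var() + impostor.var())

        # Hypothetical per-part Procrustes distances for matching (genuine)
        # and non-matching (impostor) gait pairs; four body parts.
        rng = np.random.default_rng(0)
        parts = ["head", "torso", "thigh", "shank"]
        weights = np.array([
            fisher_ratio(rng.normal(0.2, 0.05, 100),   # genuine distances
                         rng.normal(0.6, 0.10, 100))   # impostor distances
            for _ in parts
        ])
        weights /= weights.sum()                       # normalize to sum to 1
        print(dict(zip(parts, weights.round(3))))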

    Multiple Views Gait Recognition using View Transformation Model Based on Optimized Gait Energy Image

    Gait is a well recognized biometric that has been widely used for human identification. However, current gait recognition can struggle when the viewing angle changes, because the viewing angle under which the gait signature database was generated may not match the viewing angle at which the probe data are captured. This paper proposes a new multi-view gait recognition approach that tackles this problem. Unlike other approaches in the same category, the new method creates a View Transformation Model (VTM) from spatial-domain Gait Energy Images (GEI) using Singular Value Decomposition (SVD). To further improve the VTM, Linear Discriminant Analysis (LDA) is used to optimize the GEI feature vectors. Implementing SVD raises practical problems such as large matrix size and over-fitting, so reduced SVD is introduced to alleviate these effects. Using the generated VTM, the viewing angles of gallery and probe gait data can be transformed into the same direction, so that gait signatures can be compared directly. Extensive experiments show that the proposed algorithm significantly improves multi-view gait recognition performance compared with similar methods in the literature.
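    The reduced (thin) SVD mentioned above keeps only the leading part of the decomposition instead of forming the full square factor matrices. A minimal NumPy illustration on a hypothetical matrix of flattened GEI feature vectors (not the authors' VTM training code; the matrix layout and rank are assumptions) is:

        import numpy as np

        # Hypothetical data matrix: each column stacks the flattened GEI vectors
        # of one subject over four viewing angles.
        rng = np.random.default_rng(0)
        G = rng.standard_normal((4 * 1024, 50))      # 4 views x 1024-dim GEI, 50 subjects

        # Thin SVD: U is (4096, 50), s has 50 values, Vt is (50, 50) --
        # far smaller than the (4096, 4096) U produced by a full SVD.
        U, s, Vt = np.linalg.svd(G, full_matrices=False)

        k = 20                                       # keep the k largest singular values
        G_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]     # rank-k approximation of G
        print(G.shape, U.shape, G_k.shape)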

    Pairwise Shape Configuration-based PSA for Gait Recognition under Small Viewing Angle Change

    Two main components of Procrustes Shape Analysis (PSA) are adopted and adapted specifically for gait recognition under small viewing angle change: 1) the Procrustes Mean Shape (PMS) for gait signature description, and 2) the Procrustes Distance (PD) for similarity measurement. Pairwise Shape Configuration (PSC) is proposed as a shape descriptor in place of the Centroid Shape Configuration (CSC) used in conventional PSA, and tolerates shape change caused by viewing angle change better than CSC. A small variation in viewing angle mainly affects the global gait appearance and has little impact on local spatio-temporal motion, so PSC, which effectively embeds local shape information, can generate a robust view-invariant gait feature. To further enhance recognition performance, a novel boundary re-sampling process is proposed. It supplies only the necessary re-sampled points to the PSC description and, at the same time, efficiently solves the problems of boundary point correspondence, boundary normalization, and boundary smoothness by exploiting prior knowledge of body pose structure. Comprehensive experiments are carried out on the CASIA gait database. The proposed method significantly improves gait recognition under small viewing angle change, without requiring supervised learning, a known viewing angle, or a multi-camera system, when compared with other methods in the literature.
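    PSA typically represents a silhouette boundary as a vector of complex numbers. As background for the PMS/PD terminology, the NumPy sketch below computes the standard full Procrustes distance between two such boundary shapes; it is not the paper's PSC descriptor or re-sampling scheme, and the toy square boundary is made up for illustration.

        import numpy as np

        def procrustes_distance(z1, z2):
            """Full Procrustes distance between two 2-D shapes given as complex
            boundary-point vectors with corresponding points."""
            z1 = z1 - z1.mean(); z1 = z1 / np.linalg.norm(z1)   # remove translation, scale
            z2 = z2 - z2.mean(); z2 = z2 / np.linalg.norm(z2)
            # the optimal rotation is absorbed by taking the modulus of the inner product
            return np.sqrt(max(0.0, 1.0 - abs(np.vdot(z1, z2)) ** 2))

        # Toy example: a square boundary and a rotated, scaled, shifted copy of it.
        t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        square = np.sign(np.cos(t)) + 1j * np.sign(np.sin(t))
        copy = 2.5 * np.exp(1j * 0.7) * square + (3 + 4j)       # similarity transform
        print(round(procrustes_distance(square, copy), 6))      # 0.0 (shape-invariant)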

    Gait Recognition Under Various Viewing Angles Based on Correlated Motion Regression


    Attribute-Based Learning for Large Scale Object Classification

    Scalability to large numbers of classes is an important challenge for multi-class classification. Prediction can become computationally infeasible at test time when it requires evaluating a separate classifier trained for every individual class. This paper proposes an attribute-based learning method to overcome this limitation. First, attributes and their associations with object classes are defined automatically and simultaneously, with the associations learned by a greedy strategy under certain conditions. Second, a classifier is learned for each attribute instead of each class, and these trained classifiers are then used to predict classes from their attribute representations. The proposed method also allows a trade-off between test-time complexity, which grows linearly with the number of attributes, and accuracy. Experiments on the Animals-with-Attributes and ILSVRC2010 datasets show that the performance of the method is promising compared with the state of the art.
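    To illustrate why test-time cost scales with the number of attributes rather than the number of classes, the NumPy sketch below scores classes through a bank of attribute classifiers; the binary class-attribute matrix, the linear attribute scorers, and the log-likelihood scoring rule are all assumptions for illustration, not the paper's greedy association learning.

        import numpy as np

        rng = np.random.default_rng(0)
        n_classes, n_attributes, feat_dim = 1000, 64, 128

        # Assumed binary class-attribute association matrix (one row per class).
        class_attr = rng.integers(0, 2, size=(n_classes, n_attributes))

        # Stand-ins for 64 trained attribute classifiers (one linear scorer each):
        # at test time we evaluate 64 of them, not 1000 per-class classifiers.
        attr_weights = rng.standard_normal((n_attributes, feat_dim))

        def predict_class(x):
            attr_scores = attr_weights @ x                       # one score per attribute
            attr_probs = 1.0 / (1.0 + np.exp(-attr_scores))      # sigmoid pseudo-probabilities
            # score each class by how well its attribute signature matches the predictions
            class_scores = (class_attr @ np.log(attr_probs + 1e-9)
                            + (1 - class_attr) @ np.log(1 - attr_probs + 1e-9))
            return int(np.argmax(class_scores))

        x = rng.standard_normal(feat_dim)                        # hypothetical image feature
        print(predict_class(x))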